Optimality and convexity conditions for piecewise smooth objective functions

Author

  • Andreas Griewank
Abstract

In the recent book Optimization Stories, edited by Martin Grötschel, Robert Mifflin and Claudia Sagastizábal [4] describe how nonsmooth optimization arose through a rare collaboration between Soviet and Western scientists in the 1960s and 1970s. In the seemingly simple case of a convex objective function, there exists everywhere a nonempty set of subgradients whose shortest element provides a direction of descent. While this set is also convex, it may be quite difficult to compute and represent. On the other hand, there are classical scenarios where it is quite easy to compute one of the subgradients or, in the more general Lipschitzian case, one generalized gradient. This led to the widely accepted black-box paradigm, namely that nonsmooth optimization algorithms should be based on an evaluation oracle that provides the function value and just one generalized gradient at any point in the function domain. The resulting bundle methods were described and analyzed in great detail, for example in the classical books of Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal [3]. However, there exist very simple examples where, due to the codependence of intermediate quantities, such a generalized gradient cannot be computed by the application of generalized differentiation rules. Moreover, even when generalized gradients are available everywhere, the resulting descent algorithms are typically quite slow and suffer from the lack of practical stopping criteria. Ideally, these should be based on computable and reasonably sharp optimality conditions that are satisfied approximately in a small but open neighborhood of a local minimizer. It will be shown here that this desirable situation can be achieved for objectives in the so-called abs-normal form, which also yields enough information to achieve linear, superlinear, or even quadratic convergence by successive piecewise linearization, provided certain nondegeneracy conditions are satisfied.
Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in abs-normal form [1]. This is in particular true for most popular nonsmooth test functions. By an extension of algorithmic, or automatic, differentiation, one can then compute certain first- and second-order derivative vectors and matrices that represent a local piecewise linearization …
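The abs-normal idea sketched in the abstract can be illustrated with a minimal example (a hypothetical illustration, not taken from the paper; function names are invented): every max is rewritten via the identity max(a, b) = (a + b + |a − b|)/2, so the only nonsmooth elemental left is the absolute value, whose argument becomes an explicit switching variable.

```python
# Sketch of the abs-normal rewriting for f(x, y) = max(x, y).
# Using max(a, b) = (a + b + |a - b|) / 2, the function becomes
# smooth in (x, y, s1) with a single switching variable z1 = x - y.

def f_abs_normal(x, y):
    z1 = x - y                    # switching variable: argument of the abs call
    s1 = abs(z1)                  # the only nonsmooth elemental operation
    value = 0.5 * (x + y + s1)    # smooth expression in x, y, s1
    return value, z1

def one_generalized_gradient(x, y):
    # Away from the kink z1 = 0 the function is locally smooth; replacing
    # the derivative of abs by sign(z1) yields one generalized gradient.
    z1 = x - y
    sg = 1.0 if z1 > 0 else -1.0 if z1 < 0 else 0.0  # any value in [-1, 1] is valid at the kink
    # d/dx 0.5*(x + y + |x - y|) = 0.5*(1 + sg),  d/dy = 0.5*(1 - sg)
    return (0.5 * (1 + sg), 0.5 * (1 - sg))
```

Where the switching variable is nonzero the function is locally smooth, and the sign pattern of the switching variables determines which smooth piece is active; the returned pair is exactly the kind of single generalized gradient that the black-box oracle described above is expected to provide.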


Similar Articles

First- and second-order optimality conditions for piecewise smooth objective functions

Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in the so-called abs-normal form. By an extension of algorithmic, or automatic, differentiation, one can then compute certain first- and second-order derivative vectors and matrices that represent a local piecewise linearization …


New optimality conditions for multiobjective fuzzy programming problems

In this paper we study fuzzy multiobjective optimization problems defined for $n$ variables. Based on a new $p$-dimensional fuzzy stationary-point definition, necessary efficiency conditions are obtained. We then prove that these conditions are also sufficient under new fuzzy generalized convexity notions. Furthermore, the results are obtained under general differentiability hypotheses.


Achieving Linear or Quadratic Convergence on Piecewise Smooth Optimization Problems

Many problems in machine learning involve objective functions that are piecewise smooth [7] due to the occurrence of absolute values, mins, and maxes in their evaluation procedures; see e.g. [8]. For such functions we derived in [3] first-order (KKT) and second-order (SSC) optimality conditions, which can be checked on the basis of a local piecewise linearization [2] that can be computed in an AD …


On Sequential Optimality Conditions without Constraint Qualifications for Nonlinear Programming with Nonsmooth Convex Objective Functions

Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Here, nonsmooth approximate gradient projection and complementary approximate Karush-Kuhn-Tucker conditions are presented. These sequential optimality conditions are satisfied by local minimizers of optimization problems independently of the fulfillment of constraint qualifications …


Optimality and Duality for Non-smooth Multiple Objective Semi-infinite Programming

The purpose of this paper is to consider a class of non-smooth multiobjective semi-infinite programming problems. Based on the concepts of local cone approximation, $K$-directional derivative and $K$-subdifferential, a new generalization of convexity, namely generalized uniform $K$-$(F, \alpha, \rho, d)$-convexity, is defined for this problem. For such semi-infinite programming problems, several sufficient …




Publication date: 2016